Columbia University/VIREO-CityU/IRIT TRECVID2008 High-Level Feature Extraction and Interactive Video Search

Authors

  • Shih-Fu Chang
  • Junfeng He
  • Yu-Gang Jiang
  • Akira Yanagawa
  • Eric Zavesky
  • Elie el Khoury
  • Chong-Wah Ngo
Abstract

  • A_CU-run6: local feature alone – average fusion of 3 SVM classification results for each concept using various feature-representation choices.
  • A_CU-run5: linear weighted fusion of A_CU-run6 with two grid-based global features (color moment and wavelet texture).
  • A_CU-run4: linear weighted fusion of A_CU-run5 with an SVM classification result using detection scores of CU-VIREO374 as features.
  • C_CU-run3: linear weighted fusion of A_CU-run4 with an SVM classification result using web images.
  • A_CU-run2: re-ranking of the “two_people” and “singing” results from A_CU-run4 with concept-specific detectors.
  • C_CU-run1: linear weighted fusion of A_CU-run2 with an SVM classification result using web images.


Similar articles

Beyond Semantic Search: What You Observe May Not Be What You Think

This paper presents our approaches and results of the four TRECVID 2008 tasks we participated in: high-level feature extraction, automatic video search, video copy detection, and rushes summarization. In high-level feature extraction, we jointly submitted our results with Columbia University. The four runs submitted through CityU aim to explore context-based concept fusion by modeling inter-con...


Modeling Local Interest Points for Semantic Detection and Video Search at TRECVID 2006

Local interest points (LIPs) and their features have been shown to obtain surprisingly good results in object detection and recognition. Their effectiveness and scalability, however, have not been seriously addressed for large-scale multimedia databases, for instance the TRECVID benchmark. The goal of our work is to investigate the role and performance of LIPs when coupled with multi-modality featur...


Experimenting VIREO-374: Bag-of-Visual-Words and Visual-Based Ontology for Semantic Video Indexing and Search

In this paper, we present our approaches and results for high-level feature extraction and automatic video search in TRECVID-2007. In high-level feature extraction, our main focus is to explore the upper limit of the bag-of-visual-words (BoW) approach based upon local appearance features. We study and evaluate several factors which could impact the performance of BoW. By considering these important f...


Zhejiang University at TRECVID 2006

We participated in the high-level feature extraction and interactive-search tasks for TRECVID 2006. Interaction and integration of multi-modality media types such as visual, audio, and textual data in video are the essence of video content analysis. Although any single modality expresses only limited semantics, video semantics are fully manifested only by interaction and integ...



Publication date: 2008